Linear programming formulation for non-stationary, finite-horizon Markov decision process models

Authors

  • Arnab Bhattacharya
  • Jeffrey P. Kharoufeh
Abstract

Linear programming (LP) formulations are often employed to solve stationary, infinite-horizon Markov decision process (MDP) models. We present an LP approach to solving non-stationary, finite-horizon MDP models that can potentially overcome the computational challenges of standard MDP solution procedures. Specifically, we establish the existence of an LP formulation for risk-neutral MDP models whose states and transition probabilities are temporally heterogeneous. This formulation can be recast as an approximate linear programming formulation with significantly fewer decision variables.
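The primal LP the abstract refers to can be written down concretely for a small instance: one value variable per (epoch, state) pair and one constraint per (epoch, state, action) triple, with time-varying rewards and transitions. The sketch below is illustrative only and not the paper's formulation — the model data are randomly generated, a uniform initial distribution and zero terminal reward are assumed, and SciPy's `linprog` stands in for a production LP solver. A backward-induction pass verifies the LP recovers the dynamic-programming value function.

```python
# Minimal sketch: finite-horizon, non-stationary MDP solved as an LP.
# Assumptions (not from the paper): toy random data, uniform initial
# distribution, zero terminal value v_T = 0.
import numpy as np
from scipy.optimize import linprog

S, A, T = 2, 2, 3  # states, actions, decision epochs

rng = np.random.default_rng(0)
r = rng.uniform(0.0, 1.0, size=(T, S, A))      # time-varying rewards r_t(s, a)
P = rng.dirichlet(np.ones(S), size=(T, S, A))  # transitions p_t(s' | s, a)

# Decision variables: v[t, s] for t = 0..T-1, flattened row-major.
idx = lambda t, s: t * S + s
n = T * S

# Objective: minimize sum_s v[0, s]  (uniform initial distribution).
c = np.zeros(n)
c[:S] = 1.0

# Constraints: v[t, s] >= r_t(s, a) + sum_{s'} p_t(s' | s, a) v[t+1, s'],
# rewritten in A_ub @ x <= b_ub form.
A_ub, b_ub = [], []
for t in range(T):
    for s in range(S):
        for a in range(A):
            row = np.zeros(n)
            row[idx(t, s)] = -1.0
            if t + 1 < T:
                for s2 in range(S):
                    row[idx(t + 1, s2)] += P[t, s, a, s2]
            A_ub.append(row)
            b_ub.append(-r[t, s, a])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n)  # value variables are unbounded
v_lp = res.x.reshape(T, S)

# Sanity check: backward induction yields the same value function.
v_dp = np.zeros((T + 1, S))
for t in reversed(range(T)):
    v_dp[t] = (r[t] + P[t] @ v_dp[t + 1]).max(axis=1)
print(np.allclose(v_lp, v_dp[:T], atol=1e-6))
```

Note the variable count, `T * S`, grows linearly with the horizon — which is what motivates the approximate LP formulation with fewer decision variables mentioned in the abstract.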


Related articles

A Linear Programming Approach to Nonstationary Infinite-Horizon Markov Decision Processes

Nonstationary infinite-horizon Markov decision processes (MDPs) generalize the most well-studied class of sequential decision models in operations research, namely, that of stationary MDPs, by relaxing the restrictive assumption that problem data do not change over time. Linear programming (LP) has been very successful in obtaining structural insights and devising solution meth...


A stochastic programming approach for planning horizons of infinite horizon capacity planning problems

Planning horizon is a key issue in production planning. Different from previous approaches based on Markov Decision Processes, we study the planning horizon of capacity planning problems within the framework of stochastic programming. We first consider an infinite horizon stochastic capacity planning model involving a single resource, linear cost structure, and discrete distributions for genera...


Finite-Horizon Markov Decision Processes with State Constraints

Markov Decision Processes (MDPs) have been used to formulate many decision-making problems in science and engineering. The objective is to synthesize the best decision (action selection) policies to maximize expected rewards (minimize costs) in a given stochastic dynamical environment. In many practical scenarios (multi-agent systems, telecommunication, queuing, etc.), the decision-making probl...


Risk-Sensitive Control of Markov Decision Processes

This paper introduces an algorithm to determine near-optimal control laws for Markov Decision Processes with a risk-sensitive criterion. Both the fully observed and the partially observed settings are considered, for finite and infinite horizon formulations. Dynamic programming equations are introduced which characterize the value function for the partially observed, infinite-horizon, discounted c...


Persistently Optimal Policies in Stochastic Dynamic Programming with Generalized Discounting

In this paper we study a Markov decision process with a non-linear discount function. Our approach is in the spirit of the von Neumann-Morgenstern concept and is based on the notion of expectation. First, we define a utility on the space of trajectories of the process in the finite and infinite time horizon and then take their expected values. It turns out that the associated optimization problem l...



Journal:
  • Oper. Res. Lett.

Volume 45, Issue 

Pages  -

Published 2017